Global optimization of objective functions represented by ReLU networks
Abstract
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts. Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures. Verification algorithms address this need and provide formal guarantees about a neural network by answering “yes or no” questions. For example, they can answer whether a violation exists within certain bounds. However, individual “yes or no” questions cannot answer qualitative questions such as “what is the largest error within these bounds”; the answers to these lie in the domain of optimization. Therefore, we propose strategies to extend existing verifiers to perform optimization and find: (i) the most extreme failure in a given input region and (ii) the minimum input perturbation required to cause a failure. A naive approach using a bisection search with an off-the-shelf verifier results in many expensive and overlapping calls to the verifier. Instead, we propose an approach that tightly integrates the optimization process into the verification procedure, achieving better runtime performance than the naive approach. We evaluate our approach implemented as an extension of Marabou, a state-of-the-art neural-network verifier, and compare its performance with MIPVerify, an optimization-based verifier, observing complementary performance between Marabou and MIPVerify.
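The naive baseline the abstract mentions can be sketched in a few lines. The sketch below is illustrative only: `violation_exists` is a hypothetical stand-in for one full off-the-shelf verifier query (e.g., a single Marabou call), not the paper's actual interface, and the toy oracle at the bottom merely mimics such a query on a known function.

```python
# A minimal sketch of the naive bisection baseline: treat the verifier as a
# yes/no oracle and binary-search for the largest achievable output value.
# Each iteration costs one full verification query, which is why the paper's
# tightly integrated approach (reusing work across queries) is faster.

def maximize_with_bisection(violation_exists, lo, hi, tol=1e-3):
    """Approximate the largest c in [lo, hi] such that some input in the
    region drives the network output above c.

    `violation_exists(c)` must return True iff the verifier finds an input
    whose output exceeds c (a "violation" of the candidate bound c).
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if violation_exists(mid):   # "yes": the true maximum is at least mid
            lo = mid
        else:                       # "no": mid is a sound upper bound
            hi = mid
    return lo

# Toy usage: a stand-in oracle for f(x) = -(x - 0.3)**2 + 1 on [0, 1],
# whose true maximum is 1.0. A real query would invoke a ReLU-network
# verifier instead of enumerating a grid.
if __name__ == "__main__":
    f = lambda x: -(x - 0.3) ** 2 + 1.0
    xs = [i / 1000.0 for i in range(1001)]
    oracle = lambda c: any(f(x) > c for x in xs)
    print(maximize_with_bisection(oracle, lo=0.0, hi=2.0))  # ~1.0
```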
Similar Resources
Solution of the Security Constrained Unit Commitment Problem by a New Multi-objective Optimization Method
Optimal power flow (OPF), as one of the foundational tools for analyzing complex power systems, has been studied for a long time. OPF optimizes a power system's objective functions, such as fuel cost, emissions, and losses, while simultaneously satisfying the power system's constraints. In its most general form, OPF is a nonlinear, non-convex, large-scale, static optimization problem that may include continuous and ...
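For readers unfamiliar with OPF, the snippet above describes the standard textbook formulation; the generic sketch below is an illustration added here, not this paper's specific multi-objective model.

```latex
% Generic optimal power flow formulation (textbook form, not this paper's
% model): minimize a cost objective over control variables u, subject to
% the power-flow equality constraints g and the operating limits h.
\[
  \min_{u}\; f(x, u)
  \quad \text{subject to} \quad
  g(x, u) = 0, \qquad h(x, u) \le 0,
\]
% where x collects the state variables (bus voltage magnitudes and angles),
% g encodes the nonlinear power-flow equations, and h the operating-limit
% inequalities (line flows, voltage bounds, generator limits).
```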
Optimal approximation of continuous functions by very deep ReLU networks
We prove that deep ReLU neural networks with conventional fully-connected architectures with $W$ weights can approximate continuous $\nu$-variate functions $f$ with uniform error not exceeding $a_\nu\,\omega_f(c_\nu W^{-2/\nu})$, where $\omega_f$ is the modulus of continuity of $f$ and $a_\nu, c_\nu$ are some $\nu$-dependent constants. This bound is tight. Our construction is inherently deep and nonlinear: the obtained approximation rate cann...
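To make that rate concrete, here is a short worked consequence (an illustration added here, not from the paper's text): assuming $f$ is $L$-Lipschitz, so that $\omega_f(t) \le L t$, and writing $\tilde f$ for the network's approximant, the bound becomes explicit in the weight count $W$.

```latex
% Worked specialization of the approximation bound, assuming f is
% L-Lipschitz (so \omega_f(t) <= L t); \tilde f denotes the ReLU-network
% approximant. Illustration only, not taken from the paper.
\[
  \| f - \tilde f \|_{\infty}
    \;\le\; a_\nu\, \omega_f\!\left( c_\nu W^{-2/\nu} \right)
    \;\le\; a_\nu c_\nu L \, W^{-2/\nu}.
\]
```

In this form, doubling the weight budget $W$ multiplies the error bound by $2^{-2/\nu}$, a rate that degrades as the input dimension $\nu$ grows.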
Path-Normalized Optimization of Recurrent Neural Networks with ReLU Activations
We investigate the parameter-space geometry of recurrent neural networks (RNNs), and develop an adaptation of the path-SGD optimization method, attuned to this geometry, that can learn plain RNNs with ReLU activations. On several datasets that require capturing long-term dependency structure, we show that path-SGD can significantly improve the trainability of ReLU RNNs compared to RNNs trained with SGD...
Multi-objective Grasshopper Optimization Algorithm based Reconfiguration of Distribution Networks
Network reconfiguration is a nonlinear optimization procedure which calculates a radial structure to optimize the power losses and improve the network reliability index while meeting practical constraints. In this paper, a multi-objective framework is proposed for optimal network reconfiguration with the objective functions of minimization of power losses and improvement of reliability index. T...
Approximating Continuous Functions by ReLU Nets of Minimal Width
This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d ≥ 1, what is the minimal width w so that neural nets with ReLU activations, input dimension d, hidden layer widths at most w, and arbitrary depth can approximate any continuous function of d variables arbitrarily well. It turns...
Journal
Journal title: Machine Learning
Year: 2021
ISSN: 0885-6125, 1573-0565
DOI: https://doi.org/10.1007/s10994-021-06050-2